Large margin nearest neighbor
Large margin nearest neighbor (LMNN) classification is a statistical machine learning algorithm. It learns a pseudometric designed for k-nearest neighbor classification. The algorithm is based on semidefinite programming, a subclass of convex optimization.
The goal of supervised learning (more specifically, classification) is to learn a decision rule that can categorize data instances into pre-defined classes. The k-nearest neighbor rule assumes a ''training'' data set of labeled instances (i.e. the classes are known). It classifies a new data instance with the class obtained from the majority vote of the k closest (labeled) training instances. Closeness is measured with a pre-defined metric. Large margin nearest neighbor is an algorithm that learns this global (pseudo-)metric in a supervised fashion to improve the classification accuracy of the k-nearest neighbor rule.
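The majority-vote rule described above can be sketched in a few lines of Python. All names here are illustrative, not from any particular library; the metric is a plug-in parameter, which is exactly the slot that LMNN's learned metric would fill:

```python
# Minimal sketch of the k-nearest-neighbor rule with a pluggable metric.
from collections import Counter
import math

def euclidean(a, b):
    """Default pre-defined metric: Euclidean distance."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(x, train, k=3, metric=euclidean):
    """Classify x by majority vote among the k closest labeled points.

    train is a list of (vector, label) pairs; metric measures closeness.
    """
    neighbors = sorted(train, key=lambda p: metric(x, p[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy labeled training set with two classes.
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((1.0, 1.0), "b"), ((1.2, 0.9), "b")]
print(knn_classify((0.2, 0.1), train))  # → a
```

Replacing `euclidean` with a learned distance changes which training points count as "closest", which is how metric learning affects the classifier without altering the voting rule itself.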
==Setup==

The main intuition behind LMNN is to learn a pseudometric under which all data instances in the training set are surrounded by at least k instances that share the same class label. If this is achieved, the leave-one-out error (a special case of cross validation) is minimized. Let the training data consist of a data set D=\{(\vec x_1,y_1),\dots,(\vec x_n,y_n)\}\subset R^d\times C, where the set of possible class categories is C=\{1,\dots,c\}.
The algorithm learns a pseudometric of the type
:d(\vec x_i,\vec x_j)=(\vec x_i-\vec x_j)^\top\mathbf{M}(\vec x_i-\vec x_j).
For d(\cdot,\cdot) to be well defined, the matrix \mathbf{M} needs to be positive semi-definite. The Euclidean metric is a special case, where \mathbf{M} is the identity matrix. This generalization is often (falsely) referred to as a Mahalanobis metric.
Figure 1 illustrates the effect of the metric under varying \mathbf{M}. The two circles show the set of points with equal distance to the center \vec x_i. In the Euclidean case this set is a circle, whereas under the modified (Mahalanobis) metric it becomes an ellipsoid.
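A minimal sketch of the learned (pseudo)metric, in plain Python with illustrative names: parameterizing \mathbf{M} as L^\top L for an arbitrary real matrix L guarantees positive semi-definiteness, which is also how the learned matrix is commonly factorized in practice.

```python
def lmnn_distance(xi, xj, M):
    """Compute (x_i - x_j)^T M (x_i - x_j) for a square matrix M."""
    diff = [a - b for a, b in zip(xi, xj)]
    n = len(diff)
    return sum(diff[r] * M[r][c] * diff[c] for r in range(n) for c in range(n))

# M = L^T L is positive semi-definite for any real matrix L.
L = [[2.0, 0.0],
     [0.0, 1.0]]
n = len(L)
M = [[sum(L[k][r] * L[k][c] for k in range(n)) for c in range(n)] for r in range(n)]

identity = [[1.0, 0.0], [0.0, 1.0]]  # Euclidean special case
xi, xj = [1.0, 0.0], [0.0, 1.0]
print(lmnn_distance(xi, xj, identity))  # squared Euclidean distance: 2.0
print(lmnn_distance(xi, xj, M))         # first axis stretched by L: 5.0
```

With the identity matrix the formula reduces to the squared Euclidean distance; a non-identity \mathbf{M} rescales and rotates the space, turning the equidistant circle into an ellipsoid as described above.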
The algorithm distinguishes between two types of special data points: ''target neighbors'' and ''impostors''.

Excerpt source: Wikipedia, the free encyclopedia — "Large margin nearest neighbor".


